Zerobus Ingest: From Lakehouse to Insights Now

Published on 01-22-2026 10:40 AM by Community Manager

Join us for a very special BrickTalks session, where we introduce Zerobus Ingest, part of Lakeflow Connect, designed to streamline your event data ingestion into the lakehouse. We will be joined by Databricks Product Manager Victoria Bukta, who will explain how easy it is to eliminate a multi-hop architecture, reduce complexity, and enable faster access to data insights.

Why is this session so special? You'll meet the PM, dive deep into the product, ask your burning questions, and see a live end-to-end demo of Zerobus Ingest in action!

We will also provide the tools and resources so you can test-drive Zerobus Ingest yourself. Feel free to share how you are using Zerobus Ingest to simplify your ingestion architecture; creative use cases are welcome!

Happy building!

Date: Thursday, Jan 29, 2026
Time: 9:00 AM Pacific
Location: Virtual - link will be sent to registrants via email prior to the session.

Register now by clicking Yes in the "Will you be attending?" box to reserve your spot!

About the speaker:

Victoria Bukta is a Product Manager at Databricks on the Lakeflow Connect team, specializing in streaming, storage, and modern Data Lakehouse infrastructure. She previously served as a Senior Product Manager at Tabular and as an Engineering Manager at Shopify, where she led teams in large-scale CDC and Kafka ingestion and pioneered the organizational adoption of Apache Iceberg.




Add to Calendar
Starts:
Thu, Jan 29, 2026 09:00 AM PST
Ends:
Thu, Jan 29, 2026 10:00 AM PST
Location: Virtual - link will be sent to registrants via email prior to the session.
1 Comment
Amit_Dass
New Contributor II

Really want to understand how this compares with Kafka, which is already mature. Please confirm whether these limitations are correct for Zerobus:

1. Only managed Delta tables are supported as targets; table and column names must be ASCII letters/digits/underscores.
2. No schema evolution on the target table.
3. Zerobus provides at-least-once delivery; consumers must handle potential duplicates downstream (idempotent writes / dedup), as in the sketch below.
4. If we have multiple consumers, will it support consumers other than Databricks?
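On point 3, a common downstream pattern for any at-least-once source is to deduplicate on an event key before serving the data. Here is a minimal PySpark sketch, assuming a hypothetical managed Delta table main.ingest.events_raw with event_id and ingest_ts columns (all table and column names are illustrative, not taken from the Zerobus documentation):

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

# `spark` is the ambient SparkSession in a Databricks notebook.
# Table and column names are assumptions for illustration only.
raw = spark.read.table("main.ingest.events_raw")

# Keep only the most recent record per event_id to absorb
# duplicates introduced by at-least-once delivery.
w = Window.partitionBy("event_id").orderBy(F.col("ingest_ts").desc())
deduped = (
    raw.withColumn("rn", F.row_number().over(w))
       .filter(F.col("rn") == 1)
       .drop("rn")
)

deduped.write.mode("overwrite").saveAsTable("main.ingest.events_clean")
```

An alternative is an incremental MERGE keyed on event_id, which keeps the clean table idempotent without rewriting it in full on each run.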